Despite its broad availability, acquiring volumetric information from bright-field microscopy (BFM) is inherently difficult due to the projective nature of the acquisition process. We study the prediction of 3D cell instances from a set of BFM Z-stack images. We propose a novel two-stage weakly supervised approach for volumetric instance segmentation of cells that requires only approximate cell-centroid annotations. The pseudo-labels created this way are then refined with a novel refinement loss guided by the Z-stack. Evaluations show that our approach generalizes not only to BFM Z-stack data but also to other 3D cell imaging modalities. A comparison of our pipeline against fully supervised methods indicates that the substantial reduction in data collection and labeling effort comes at only a minor cost in performance.
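The abstract does not spell out how centroid annotations become volumetric pseudo-labels; as a rough, hypothetical illustration of one standard way to do this, the sketch below grows instances from centroid seeds with a marker-based watershed. The Otsu foreground threshold and function names are assumptions, not the authors' pipeline.

```python
# Minimal sketch: approximate cell centroids -> volumetric pseudo-labels
# via a seeded watershed. Illustrative only; not the paper's exact method.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def centroid_pseudo_labels(zstack: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """zstack: (Z, Y, X) intensity volume; centroids: (N, 3) approximate z/y/x."""
    # Rough foreground mask (assumption: cells separate from background by intensity).
    fg = zstack > threshold_otsu(zstack)
    # One seed marker per annotated centroid.
    markers = np.zeros(zstack.shape, dtype=np.int32)
    for i, (z, y, x) in enumerate(np.round(centroids).astype(int), start=1):
        markers[z, y, x] = i
    # Grow instances outward from the seeds within the foreground mask.
    distance = ndimage.distance_transform_edt(fg)
    return watershed(-distance, markers=markers, mask=fg)
```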
Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. To this end, several robotic platforms are currently being developed to enable or improve the outcomes of microsurgical tasks. Since the control of such robots is typically designed for navigation close to the retina, successful trocar docking and insertion of the instrument into the eye represent an additional cognitive effort, and are therefore among the open challenges in robotic retinal surgery. We present a platform for autonomous trocar docking that combines computer vision with a robotic setup. Inspired by the Cuban Colibri (hummingbird), which aligns its beak using vision alone, we mount a camera onto the end-effector of the robotic system. By estimating the position and pose of the trocar, the robot can autonomously align and navigate the instrument toward the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the method estimates the trocar's position and pose accurately, achieving repeatable autonomous docking. The aim of this work is to reduce the complexity of preparing the robotic setup before surgical tasks, and thereby make the system's integration into the clinical workflow more intuitive.
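As a hypothetical geometry sketch of the docking step only (the camera-based pose estimation is outside its scope): given an estimated trocar pose as an entry point plus axis, place the instrument tip on that axis at a standoff distance and report the angular alignment error. All names and the standoff parameter are assumptions.

```python
# Minimal docking geometry sketch, assuming the trocar pose is already
# estimated as (entry point, axis direction). Not the paper's implementation.
import numpy as np

def docking_target(tep: np.ndarray, trocar_axis: np.ndarray,
                   standoff_mm: float = 10.0) -> np.ndarray:
    """Target tip position: on the trocar axis, standoff_mm outside the TEP."""
    axis = trocar_axis / np.linalg.norm(trocar_axis)
    return tep + standoff_mm * axis

def alignment_error_deg(instrument_axis: np.ndarray, trocar_axis: np.ndarray) -> float:
    """Angle between instrument and trocar axes; 0 means perfectly aligned."""
    a = instrument_axis / np.linalg.norm(instrument_axis)
    b = trocar_axis / np.linalg.norm(trocar_axis)
    return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))
```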
Differentiable Search Indices (DSIs) encode a corpus of documents in the parameters of a model and use the same model to map queries directly to relevant document identifiers. Despite the strong performance of DSI models, deploying them in situations where the corpus changes over time is computationally expensive because reindexing the corpus requires re-training the model. In this work, we introduce DSI++, a continual learning challenge for DSI to incrementally index new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we optimize for flatter loss basins and show that the model stably memorizes more documents (+12%). Next, we introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task. Extensive experiments on novel continual indexing benchmarks based on Natural Questions (NQ) and MS MARCO demonstrate that our proposed solution mitigates forgetting by a significant margin. Concretely, it improves the average Hits@10 by +21.1% over competitive baselines for NQ and requires 6 times fewer model updates compared to re-training the DSI model for incrementally indexing five corpora in a sequence.
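As a rough sketch of the generative-memory idea described above (assuming a query generator with a `sample(docid)` interface; the mixing ratio and all names are hypothetical, not the DSI++ code): pseudo-queries for already-indexed documents are mixed into each continual-indexing batch so the model keeps rehearsing old (query -> docid) mappings.

```python
# Minimal sketch of pseudo-query replay during continual indexing.
import random

def continual_indexing_batch(new_docs, old_docids, query_generator,
                             batch_size=32, replay_fraction=0.5):
    """Build one training batch of (input_text, target_docid) pairs."""
    n_replay = int(batch_size * replay_fraction)
    batch = []
    # Indexing examples for the new corpus: document text -> its identifier.
    for doc in random.sample(new_docs, batch_size - n_replay):
        batch.append((doc["text"], doc["docid"]))
    # Replay examples: generated pseudo-queries -> previously indexed identifiers.
    for docid in random.sample(old_docids, n_replay):
        batch.append((query_generator.sample(docid), docid))
    random.shuffle(batch)
    return batch
```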
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
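A minimal PyTorch sketch of the upcycling recipe as described: every expert of the new Mixture-of-Experts layer starts as a copy of the dense MLP weights, and only the router is freshly initialized. Layer names, expert count, and the router init scale are assumptions, not the paper's T5/ViT code.

```python
# Minimal sketch: "upcycle" one dense FFN block into an MoE layer.
import copy
import torch.nn as nn

def upcycle_ffn(dense_ffn: nn.Module, d_model: int, num_experts: int = 8):
    # Each expert is an exact copy of the pretrained dense FFN.
    experts = nn.ModuleList(copy.deepcopy(dense_ffn) for _ in range(num_experts))
    # New router; small init keeps early routing near-uniform.
    router = nn.Linear(d_model, num_experts)
    nn.init.normal_(router.weight, std=1e-2)
    nn.init.zeros_(router.bias)
    return experts, router
```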
Chromosome analysis is essential for diagnosing genetic disorders. For hematologic malignancies, identification of somatic clonal aberrations by karyotype analysis remains the standard of care. However, karyotyping is costly and time-consuming because of the largely manual process and the expertise required in identifying and annotating aberrations. Efforts to automate karyotype analysis to date fell short in aberration detection. Using a training set of ~10k patient specimens and ~50k karyograms from over 5 years from the Fred Hutchinson Cancer Center, we created a labeled set of images representing individual chromosomes. These individual chromosomes were used to train and assess deep learning models for classifying the 24 human chromosomes and identifying chromosomal aberrations. The top-accuracy models utilized the recently introduced Topological Vision Transformers (TopViTs) with 2-level-block-Toeplitz masking, to incorporate structural inductive bias. TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome identification, and exhibited accuracies >99% for aberration detection in most aberrations. Notably, we were able to show high-quality performance even in "few shot" learning scenarios. Incorporating the definition of clonality substantially improved both precision and recall (sensitivity). When applied to "zero shot" scenarios, the model captured aberrations without training, with perfect precision at >50% recall. Together these results show that modern deep learning models can approach expert-level performance for chromosome aberration detection. To our knowledge, this is the first study demonstrating the downstream effectiveness of TopViTs. These results open up exciting opportunities for not only expediting patient results but providing a scalable technology for early screening of low-abundance chromosomal lesions.
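To illustrate the 2-level block-Toeplitz structure mentioned above (a sketch of the general inductive bias only, not the TopViT implementation): for a 2D grid of image patches, a learned attention bias that depends only on the relative (row, col) offset between two patches gives the attention matrix a Toeplitz-of-Toeplitz structure.

```python
# Minimal sketch of a 2-level block-Toeplitz attention bias over an HxW
# patch grid; the bias is shared across all pairs with the same offset.
import torch
import torch.nn as nn

class ToeplitzBias2D(nn.Module):
    def __init__(self, height: int, width: int):
        super().__init__()
        # One learnable scalar per relative offset: (2H-1) x (2W-1) table.
        self.table = nn.Parameter(torch.zeros(2 * height - 1, 2 * width - 1))
        r, c = torch.meshgrid(torch.arange(height), torch.arange(width),
                              indexing="ij")
        pos = torch.stack([r.flatten(), c.flatten()], dim=1)   # (HW, 2)
        rel = pos[:, None, :] - pos[None, :, :]                # (HW, HW, 2)
        self.register_buffer("idx", rel + torch.tensor([height - 1, width - 1]))

    def forward(self) -> torch.Tensor:
        # (HW, HW) bias added to attention logits before softmax.
        return self.table[self.idx[..., 0], self.idx[..., 1]]
```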
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
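As a rough sketch of what "phrasing a dataset as instructions" and "finetuning on chain-of-thought data" mean in practice (the strings below are illustrative assumptions, not the actual Flan templates):

```python
# Minimal sketch: format one supervised example as an instruction,
# optionally with a chain-of-thought target.
from typing import Optional

def format_example(question: str, answer: str,
                   rationale: Optional[str] = None):
    prompt = f"Answer the following question.\n\n{question}"
    if rationale is not None:
        # CoT finetuning: the target includes the reasoning before the answer.
        prompt += "\nLet's think step by step."
        target = f"{rationale} So the answer is {answer}."
    else:
        target = answer
    return prompt, target
```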
We present MultiCoNER, a multilingual dataset for Named Entity Recognition covering 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixed subsets. The dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities such as movie titles, and long-tail entity distributions. The 26M-token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We apply two NER models to the dataset: a baseline XLM-RoBERTa model, and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1 = 54%), highlighting the difficulty of our data. GEMNET, which uses gazetteers, improves significantly (average improvement of macro-F1 = +30%). MultiCoNER poses challenges even for large pre-trained language models, and we believe it can help further research in building robust NER systems. MultiCoNER is publicly available at https://registry.opendata.aws/multiconer/, and we hope this resource will help advance research on various aspects of NER.
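A minimal sketch of the kind of XLM-RoBERTa token-classification baseline reported above, using the Hugging Face transformers library. The BIO tagset shown is an assumption about the dataset's entity types; data loading from the registry linked above is omitted.

```python
# Minimal sketch: initialize an XLM-RoBERTa NER baseline.
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Assumed BIO tagset; check the released dataset for the exact label set.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-CORP", "I-CORP",
          "B-GRP", "I-GRP", "B-PROD", "I-PROD", "B-CW", "I-CW"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(labels)
)
```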
A common approach to avoiding overfitting in supervised learning is early stopping, where a held-out set is used for iterative evaluation during training to find the number of training steps that maximizes generalization. However, such a method requires a disjoint validation set, so part of the labeled training data is usually left out for this purpose, which is not ideal when training data is scarce. Furthermore, when training labels are noisy, the model's performance on the validation set may not be an accurate proxy for generalization. In this paper, we propose a method to spot the early stopping point during training iterations without a validation set. We first show that, in the overparameterized regime, the randomly initialized weights of linear models converge to the same direction during training. Using this result, we propose to train two parallel instances of a linear model initialized with different random seeds and use their intersection as a signal for detecting overfitting. To detect this intersection, we use the cosine distance between the weights of the parallel models over training iterations. Noting that the last layer of a NN is a linear map from the pre-last-layer activations to the output logits, we build our criterion for linear models using a new notion of counterfactual weights and propose an extension to multi-layer networks. We conduct experiments on two domains where early stopping has a clear impact on preventing overfitting of NNs: (i) learning from noisy labels; and (ii) learning to rank in IR. Our experiments on four widely used datasets confirm the effectiveness of our method. For a wide range of learning rates, our method, called the Cosine Distance Criterion (CDC), leads to better generalization on average than all methods we compare against in almost all tested cases.
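A minimal sketch of the two-parallel-models idea: train two copies of the same model from different seeds and track the cosine distance between their weight vectors. The exact stopping rule below (stop at the first increase after the minimum) is an assumption; the paper's counterfactual-weights construction and multi-layer extension are omitted.

```python
# Minimal sketch of a cosine-distance early-stopping loop (no validation set).
import numpy as np

def cosine_distance(w1: np.ndarray, w2: np.ndarray) -> float:
    return 1.0 - (w1 @ w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

def cdc_early_stop(model_a, model_b, train_step, max_steps=10_000):
    """model_a/model_b: same architecture, different random seeds;
    train_step performs one update and returns the flattened weights."""
    prev = np.inf
    for step in range(max_steps):
        wa, wb = train_step(model_a), train_step(model_b)
        dist = cosine_distance(wa, wb)
        if dist > prev:   # distance stopped shrinking -> stop here
            return step
        prev = dist
    return max_steps
```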
Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests. Various computational methods in Natural Language Processing (NLP) have been used to detect moral sentiment from textual data, but achieving strong performance in such subjective tasks requires large amounts of hand-annotated training data. Previous corpora annotated for moral sentiment have proven valuable and have generated new insights both in NLP and across the social sciences, but have been limited to Twitter. To facilitate our understanding of the role of moral rhetoric, we present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits, hand-annotated by at least three trained annotators for 8 categories of moral sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, and Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. We use a range of methods, such as cross-domain classification and knowledge transfer, to provide baseline moral-sentiment classification results for this new corpus.
The scaling properties of Transformer models have attracted much interest. However, little has been done to investigate how different inductive biases and model architectures affect these scaling properties. Do model architectures scale differently? If so, how does inductive bias affect scaling behavior? And how does this influence upstream (pretraining) and downstream (transfer) performance? This paper conducts a systematic study of the scaling behavior of ten diverse model architectures, including Transformers, Switch Transformers, Universal Transformers, Dynamic Convolutions, Performers, and the recently proposed MLP-Mixers. Through extensive experiments, we show that (1) architecture is indeed an important consideration when scaling, and (2) the best-performing model can fluctuate across different scales. We believe the findings outlined in this work have significant implications for how model architectures are currently evaluated in the community.